
    A generalization of d'Alembert formula

    In this paper we find a closed form of the solution for the factored inhomogeneous linear equation \begin{equation*} \prod_{j=1}^{n}\Big(\frac{\mathrm{d}}{\mathrm{d}t}-A_{j}\Big) u(t) = f(t), \end{equation*} under the hypothesis that $A_{1}, A_{2}, \ldots, A_{n}$ are infinitesimal generators of mutually commuting strongly continuous semigroups of bounded linear operators on a Banach space $X$. We do not assume that the $A_{j}$ are distinct, and we offer a computational method for obtaining explicit solutions of certain partial differential equations. Comment: 17 pages
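    For orientation only (a standard fact, not quoted from the paper): in the single-factor case $n=1$ the mild solution is given by the variation-of-constants (Duhamel) formula
    \begin{equation*} u(t)=e^{tA_{1}}u(0)+\int_{0}^{t}e^{(t-s)A_{1}}f(s)\,\mathrm{d}s, \end{equation*}
    where $e^{tA_{1}}$ denotes the semigroup generated by $A_{1}$; iterating formulas of this type over the $n$ commuting factors is the kind of closed form the paper makes explicit.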

    The origin of quantum nonlocality

    Quantum entanglement is the quintessential characteristic of quantum mechanics and the basis for quantum information processing. When one of two maximally entangled particles is measured, the state of the other is determined simultaneously, without being measured, no matter how far apart the two particles are. How can these phenomena take place, given that no object can move faster than the speed of light in a vacuum? The key problem is that the interaction between a particle and the quantum vacuum has been ignored. Just as a gun recoils when it fires a bullet, when a particle is created from the quantum vacuum, the vacuum is correspondingly somewhat "broken", which can be described by a shadow state in the vacuum. Through their shadows in the vacuum, two entangled particles can have a distance-independent instantaneous interaction with each other. Quantum teleportation, quantum swapping, and wave-function collapse are explained in a similar way. A quantum object can be interpreted as a composite of a particle and the shadowed quantum vacuum, which is responsible for the wave aspect of wave-particle duality. The quantum vacuum is not only the origin of all possible kinds of particles, but also the origin and the core of Eastern mysticism. Comment: 6 pages, 2 figures

    New Physics Searches with Higgs-photon associated production at the Higgs Factory

    The Higgs factory is designed for precise measurements of Higgs properties and searches for new physics. In this paper we propose that the $e^+e^- \to \gamma h$ process could be a useful channel for new physics, which is commonly parametrized model-independently by effective field theory. We calculate the cross section in both the Standard Model and the effective field theory approach, and find that the new physics effects in $\gamma h$ involve only two degrees of freedom, far fewer than in the Higgsstrahlung process. This feature could be used to reduce the degeneracies among Wilson coefficients. We also calculate, for the first time, the $2\sigma$ bounds on $\gamma h$ at the Higgs factory, and show that $\gamma h$ is more sensitive to some dimension-six operators than current experimental data. In the optimistic scenario, new physics effects may be observed at the CEPC or FCC-ee after the first couple of years of their running. Comment: 5 pages, 3 figures, submitted to Chinese Physics

    Mass minimizers and concentration for nonlinear Choquard equations in $\R^N$

    In this paper, we study the existence of minimizers of the following functional related to the nonlinear Choquard equation: \begin{equation*} E(u)=\frac{1}{2}\int_{\R^N}|\nabla u|^2+\frac{1}{2}\int_{\R^N}V(x)|u|^2-\frac{1}{2p}\int_{\R^N}(I_\alpha*|u|^p)|u|^p \end{equation*} on $\widetilde{S}(c)=\{u\in H^1(\R^N)\ |\ \int_{\R^N}V(x)|u|^2<+\infty,\ |u|_2=c,\ c>0\}$, where $N\geq1$, $\alpha\in(0,N)$, $\frac{N+\alpha}{N}\leq p<\frac{N+\alpha}{(N-2)_+}$, and $I_\alpha:\R^N\rightarrow\R$ is the Riesz potential. We present sharp existence results for $E(u)$ constrained on $\widetilde{S}(c)$ when $V(x)\equiv0$ for all $\frac{N+\alpha}{N}\leq p<\frac{N+\alpha}{(N-2)_+}$. For the mass critical case $p=\frac{N+\alpha+2}{N}$, we show that if $0\leq V(x)\in L_{loc}^{\infty}(\R^N)$ and $\lim_{|x|\rightarrow+\infty}V(x)=+\infty$, then mass minimizers exist only if $0<c<c_*=|Q|_2$ and concentrate at the flattest minimum of $V$ as $c$ approaches $c_*$ from below, where $Q$ is a ground state solution of $-\Delta u+u=(I_\alpha*|u|^{\frac{N+\alpha+2}{N}})|u|^{\frac{N+\alpha+2}{N}-2}u$ in $\R^N$.

    Bidirectional Recurrent Neural Networks for Medical Event Detection in Electronic Health Records

    Sequence labeling for the extraction of medical events and their attributes from unstructured text in Electronic Health Record (EHR) notes is a key step towards semantic understanding of EHRs. It has important applications in health informatics, including pharmacovigilance and drug surveillance. The state-of-the-art supervised machine learning models in this domain are based on Conditional Random Fields (CRFs) with features calculated from fixed context windows. In this application, we explore various recurrent neural network architectures and show that they significantly outperform the CRF models. Comment: In proceedings of NAACL HLT 201
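    A minimal sketch of the kind of bidirectional recurrent tagger described above, assuming PyTorch; the layer sizes, label count, and training snippet are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Bidirectional LSTM sequence labeler for per-token medical event tags (illustrative)."""
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden_dim, num_tags)  # concatenated forward/backward states

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        x = self.embed(token_ids)
        h, _ = self.bilstm(x)      # (batch, seq_len, 2 * hidden_dim)
        return self.proj(h)        # per-token tag logits

# Toy usage: per-token cross-entropy against gold event/attribute labels
model = BiLSTMTagger(vocab_size=20000, num_tags=9)
logits = model(torch.randint(1, 20000, (2, 30)))
loss = nn.CrossEntropyLoss()(logits.view(-1, 9), torch.randint(0, 9, (60,)))
```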

    Maximal hypersurfaces over exterior domains

    In this paper, we study the exterior problem for the maximal surface equation. We obtain the precise asymptotic behavior of the exterior solution at infinity, and we prove that the exterior Dirichlet problem is uniquely solvable given admissible boundary data and prescribed asymptotic behavior at infinity. Comment: 24 pages, 1 figure
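    For context (a standard definition, not quoted from the paper): the maximal surface equation for a spacelike graph $u$ over a domain in Minkowski space reads
    \begin{equation*} \operatorname{div}\left(\frac{\nabla u}{\sqrt{1-|\nabla u|^{2}}}\right)=0, \qquad |\nabla u|<1, \end{equation*}
    the Lorentzian counterpart of the minimal surface equation.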

    Calibrating Structured Output Predictors for Natural Language Processing

    We address the problem of calibrating prediction confidence for output entities of interest in natural language processing (NLP) applications. It is important that NLP applications such as named entity recognition and question answering produce calibrated confidence scores for their predictions, especially if the system is to be deployed in a safety-critical domain such as healthcare. However, the output space of such structured prediction models is often too large for binary or multi-class calibration methods to be applied directly. In this study, we propose a general calibration scheme for output entities of interest in neural-network-based structured prediction models. Our proposed method can be used with any binary-class calibration scheme and any neural network model. Additionally, we show that our calibration method can also be used as an uncertainty-aware, entity-specific decoding step to improve the performance of the underlying model at no additional training cost or data requirement. We show that our method outperforms current calibration techniques for named entity recognition, part-of-speech tagging, and question answering. Our decoding step also improves model performance across several tasks and benchmark datasets, and our method improves both calibration and model performance in out-of-domain test scenarios. Comment: ACL 2020; 9 pages + 4-page appendix
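    As a concrete illustration of the kind of binary calibration scheme such a method could plug in, here is a Platt-scaling sketch in Python; the toy scores, labels, and the use of scikit-learn's LogisticRegression are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Uncalibrated confidence scores for candidate entities on a held-out set,
# with 0/1 labels marking whether each predicted entity was actually correct.
scores = np.array([2.1, 0.3, -1.2, 1.7, -0.4]).reshape(-1, 1)   # toy data
correct = np.array([1, 1, 0, 1, 0])

# Platt scaling: fit a logistic regression from raw score to correctness probability.
calibrator = LogisticRegression()
calibrator.fit(scores, correct)

# Calibrated probability that new predicted entities are correct.
new_scores = np.array([[1.0], [-0.8]])
print(calibrator.predict_proba(new_scores)[:, 1])
```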

    Multi-modal Face Pose Estimation with Multi-task Manifold Deep Learning

    Human face pose estimation aims at estimating the gaze direction or head posture from 2D images. It provides important cues for tasks such as communicative-gesture analysis and saliency detection, and has attracted considerable attention recently. However, it is challenging because of complex backgrounds, varied orientations, and limited visibility of face appearance. A descriptive representation of face images and a mapping from that representation to poses are therefore critical. In this paper, we make use of multi-modal data and propose a novel face pose estimation method that uses a deep learning framework named Multi-task Manifold Deep Learning ($M^2DL$). It is based on feature extraction with improved deep neural networks and on multi-modal mapping-relationship learning with multi-task learning. In the proposed deep learning framework, Manifold Regularized Convolutional Layers (MRCL) improve traditional convolutional layers by learning the relationship among the outputs of neurons. In the proposed mapping-relationship learning method, different modalities of face representation are naturally combined to learn the mapping function from face images to poses. In this way, the learned multi-task mapping model is improved. Experimental results on three challenging benchmark datasets, DPOSE, HPID and BKHPD, demonstrate the outstanding performance of $M^2DL$.
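    A generic sketch of a graph-Laplacian manifold regularization penalty of the sort that relates layer outputs across similar samples; this is an assumption-laden illustration, not the MRCL layer from the paper.

```python
import torch

def manifold_penalty(features, adjacency):
    """Graph-Laplacian style manifold regularizer (illustrative only, not the paper's MRCL).
    features:  (n, d) layer outputs for n samples
    adjacency: (n, n) nonnegative similarity weights between samples
    Penalizes large feature distances between samples the graph deems similar."""
    diff = features.unsqueeze(1) - features.unsqueeze(0)   # (n, n, d) pairwise differences
    sq_dist = (diff ** 2).sum(dim=-1)                      # (n, n) squared distances
    return (adjacency * sq_dist).sum() / 2

# Toy usage: total_loss = task_loss + lam * manifold_penalty(layer_out, W)
layer_out = torch.randn(8, 16)
W = torch.rand(8, 8)
print(manifold_penalty(layer_out, W))
```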

    Histogram Transform-based Speaker Identification

    A novel text-independent speaker identification (SI) method is proposed. This method uses Mel-frequency cepstral coefficients (MFCCs) and the dynamic information among adjacent frames as feature sets to capture speaker characteristics. In order to utilize the dynamic information, we design super-MFCC features by cascading three neighboring MFCC frames together. The probability density function (PDF) of these super-MFCC features is estimated by the recently proposed histogram transform (HT) method, which generates more training data by random transforms to realize the histogram PDF estimation and reduces the discontinuity problem that commonly occurs in multivariate histogram computation. Compared to conventional PDF estimation methods, such as Gaussian mixture models, the HT model shows promising improvement in SI performance. Comment: Technical Report
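    A small sketch of the frame-cascading step described above, assuming NumPy and an MFCC matrix of shape (frames, coefficients); the edge handling (dropping the first and last frame) is an assumption.

```python
import numpy as np

def super_mfcc(mfcc):
    """Cascade each MFCC frame with its two neighbors into a 'super-MFCC' vector.
    mfcc: (num_frames, num_coeffs) array of per-frame MFCCs.
    Returns (num_frames - 2, 3 * num_coeffs): rows are [previous, current, next]."""
    return np.hstack([mfcc[:-2], mfcc[1:-1], mfcc[2:]])

# Toy usage: 100 frames x 13 coefficients -> 98 x 39 super-MFCC features
features = super_mfcc(np.random.randn(100, 13))
print(features.shape)
```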

    Neural Semantic Encoders

    We present a memory-augmented neural network for natural language understanding: Neural Semantic Encoders (NSE). NSE is equipped with a novel memory update rule and has a variable-sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose and write operations. NSE can also access multiple and shared memories. In this paper, we demonstrate the effectiveness and flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis and machine translation, where NSE achieves state-of-the-art performance when evaluated on publicly available benchmarks. For example, our shared-memory model shows an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU. Comment: Accepted in EACL 2017; added: comparison with NTM, qualitative analysis and memory visualization
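    A schematic sketch of an attention-based read/compose/write memory update in the spirit described above, assuming PyTorch; the slot-wise interpolation and the compose network are assumptions, not the exact NSE update rule.

```python
import torch
import torch.nn.functional as F

def read_compose_write(memory, x, compose):
    """One read/compose/write step over an encoding memory (illustrative sketch).
    memory:  (slots, dim) current encoding memory
    x:       (dim,) representation of the current input token
    compose: module mapping a (2 * dim,) vector to a (dim,) composed vector
    """
    z = F.softmax(memory @ x, dim=0)       # read: attention weights over memory slots
    m = z @ memory                          # retrieved memory summary
    c = compose(torch.cat([x, m]))          # compose: combine input with retrieved memory
    memory = (1 - z).unsqueeze(1) * memory + z.unsqueeze(1) * c   # write: update slots
    return memory, c

# Toy usage
dim = 8
compose = torch.nn.Sequential(torch.nn.Linear(2 * dim, dim), torch.nn.Tanh())
mem, out = read_compose_write(torch.randn(5, dim), torch.randn(dim), compose)
print(mem.shape, out.shape)
```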